Route planning method of UAV swarm based on dynamic cluster particle swarm optimization
Longbao WANG, Yinqi LUAN, Liang XU, Xin ZENG, Shuai ZHANG, Shufang XU
Journal of Computer Applications    2023, 43 (12): 3816-3823.   DOI: 10.11772/j.issn.1001-9081.2022111763

Route planning is crucial to the task execution of Unmanned Aerial Vehicle (UAV) swarms, and the computation is usually complex in high-dimensional scenarios. Swarm intelligence provides a good solution to this problem. The Particle Swarm Optimization (PSO) algorithm is especially suitable for the route planning problem because of its few parameters, fast convergence and simple operation. However, PSO has poor global search ability and easily falls into local optima when applied to route planning. To solve these problems and improve the effect of UAV swarm route planning, a Dynamic Cluster Particle Swarm Optimization (DCPSO) algorithm was proposed. Firstly, the artificial potential field method and the receding horizon control principle were used to model the task scenario of the UAV swarm route planning problem. Secondly, the Tent chaotic map and a dynamic cluster mechanism were introduced to further improve global search ability and search accuracy. Finally, the DCPSO algorithm was used to optimize the objective function of the model to obtain each trajectory point of the UAV swarm. Simulation experiments were carried out on 10 benchmark functions with different combinations of unimodal/multimodal and low-dimension/high-dimension. The results show that compared with the PSO algorithm, Pigeon-Inspired Optimization (PIO), the Sparrow Search Algorithm (SSA) and the Chaotic Disturbance Pigeon-Inspired Optimization (CDPIO) algorithm, the DCPSO algorithm achieves better optimal values, means and variances, higher search accuracy and stronger stability. Besides, the performance and effect of the DCPSO algorithm were demonstrated in route planning application instances of UAV swarm simulation experiments.
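As an illustration of the chaotic initialization idea mentioned above, the following is a minimal sketch of seeding a particle swarm from a Tent chaotic map; the 0.7-breakpoint parameterization and the interface are illustrative assumptions, not the paper's exact formulation:

```python
def tent_map(x):
    """One step of the Tent chaotic map (a common 0.7-breakpoint variant)."""
    return x / 0.7 if x < 0.7 else (1.0 - x) / 0.3

def init_swarm(num_particles, dim, lower, upper, x0=0.37):
    """Fill each particle coordinate from the chaotic orbit, rescaled to
    the search box [lower, upper]^dim."""
    swarm, x = [], x0
    for _ in range(num_particles):
        particle = []
        for _ in range(dim):
            x = tent_map(x)
            particle.append(lower + x * (upper - lower))
        swarm.append(particle)
    return swarm

swarm = init_swarm(num_particles=20, dim=3, lower=-10.0, upper=10.0)
```

Compared with uniform random sampling, the chaotic orbit tends to spread initial particles more evenly over the search box, which is the motivation usually given for chaotic initialization.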

Zero-shot relation extraction model via multi-template fusion in Prompt
Liang XU, Chun ZHANG, Ning ZHANG, Xuetao TIAN
Journal of Computer Applications    2023, 43 (12): 3668-3675.   DOI: 10.11772/j.issn.1001-9081.2022121869

The Prompt paradigm is widely applied to zero-shot Natural Language Processing (NLP) tasks. However, existing Prompt-based zero-shot Relation Extraction (RE) models suffer from the difficulty of constructing answer space mappings and from dependence on manual template selection, which leads to suboptimal performance. To address these issues, a zero-shot RE model via multi-template fusion in Prompt was proposed. Firstly, the zero-shot RE task was defined as a Masked Language Model (MLM) task, and the construction of an answer space mapping was abandoned; instead, the words output by the template were compared with the relation description text in the word embedding space to determine the relation class. Then, the part of speech of the relation description text was introduced as a feature, and the weight between this feature and each template was learned. Finally, this weight was used to fuse the results output by multiple templates, thereby reducing the performance loss caused by manual selection of Prompt templates. Experimental results on FewRel (Few-shot Relation extraction dataset) and TACRED (Text Analysis Conference Relation Extraction Dataset) show that the proposed model significantly outperforms the current state-of-the-art model, RelationPrompt, in F1 score under different data resource settings, with increases of 1.48 to 19.84 percentage points and 15.27 to 15.75 percentage points respectively. These results demonstrate the effectiveness of the proposed model for zero-shot RE tasks.
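The core matching step can be sketched as follows: score each relation by a template-weighted sum of cosine similarities between the vector predicted at the [MASK] position and the relation description embedding, then pick the best-scoring relation. The embedding values, relation names and the simple weighted-sum fusion below are illustrative assumptions, not the paper's learned model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def fuse_and_predict(mask_vecs, template_weights, relation_vecs):
    """Score each relation by the weighted sum, over templates, of the
    similarity between the [MASK]-position vector and the relation
    description embedding; return the best-scoring relation."""
    def score(rel):
        return sum(w * cosine(m, relation_vecs[rel])
                   for m, w in zip(mask_vecs, template_weights))
    return max(relation_vecs, key=score)

# Toy 3-d embeddings and two templates (all values hypothetical).
relations = {"founder_of": [0.9, 0.1, 0.0], "born_in": [0.1, 0.8, 0.2]}
mask_vecs = [[0.8, 0.2, 0.0], [0.7, 0.3, 0.1]]   # one vector per template
print(fuse_and_predict(mask_vecs, [0.6, 0.4], relations))  # → founder_of
```

Because the class is decided by similarity in embedding space, no explicit answer-space mapping from vocabulary words to relation labels is required.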

Pixel classification-based multiscale UAV aerial object rotational tracking algorithm
Yuanliang XUE, Guodong JIN, Lining TAN, Jiankun XU
Journal of Computer Applications    2022, 42 (7): 2239-2247.   DOI: 10.11772/j.issn.1001-9081.2021040689

Aiming at the problem that the vertical (axis-aligned) tracking box limits tracking accuracy when dealing with scale changes, similar objects and aspect-ratio changes in the UAV tracking process, a pixel classification-based multiscale Unmanned Aerial Vehicle (UAV) aerial object rotational tracking algorithm was proposed. Firstly, MS-ResNet (MultiScale ResNet-50) was designed to extract multiscale features of the object. Then, a pixel binary classification module was designed on the multi-channel response map with orthogonal characteristics to further refine the results of the classification and regression branches. Meanwhile, to improve pixel classification accuracy, the concurrent spatial and channel “Squeeze & Excitation” (scSE) module was used to filter the object features in the spatial and channel domains. Finally, a rotational tracking box fitting the actual size of the object was generated based on pixel classification to avoid the contamination of positive samples. Experimental results show that the proposed algorithm achieves a success rate of 60.7% and a precision of 79.5% on the UAV tracking dataset UAV123, which are 5 and 2.7 percentage points higher respectively than those of the Siamese Region Proposal Network (SiamRPN), and runs at 67.5 frames per second, meeting real-time requirements. The proposed algorithm has good scale adaptation, discrimination ability and robustness, and can effectively cope with UAV tracking tasks.
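To illustrate how a rotational box can be derived from pixel-level classification output, the sketch below fits an oriented box to a binary foreground mask via PCA of the pixel coordinates. This is a generic stand-in for the paper's box-generation step, whose exact fitting rule is not given in the abstract:

```python
import numpy as np

def rotated_box_from_mask(mask):
    """Fit an oriented box to the foreground pixels via PCA of their
    coordinates: returns the centre, the extents along the two principal
    axes (minor, major), and the major-axis angle in degrees."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov((pts - center).T))
    proj = (pts - center) @ eigvecs          # coordinates in the box frame
    extents = proj.max(axis=0) - proj.min(axis=0)
    major = eigvecs[:, -1]                   # direction of largest variance
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return center, extents, angle

mask = np.zeros((50, 50), dtype=bool)
mask[10:20, 5:45] = True                     # a 10 x 40 horizontal bar
center, extents, angle = rotated_box_from_mask(mask)
```

Unlike an axis-aligned box, the fitted box rotates with the object, so an elongated tilted target does not drag background pixels into the positive-sample region.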

Cross-domain person re-identification method based on attention mechanism with learning intra-domain variance
Daili CHEN, Guoliang XU
Journal of Computer Applications    2022, 42 (5): 1391-1397.   DOI: 10.11772/j.issn.1001-9081.2021030459

To solve the severe performance degradation of the person re-identification task during cross-domain migration, a new cross-domain person re-identification method based on an attention mechanism with learning intra-domain variance was proposed. Firstly, ResNet50 was used as the backbone network, with some modifications to make it more suitable for the person re-identification task, and the Instance-Batch Normalization Network (IBN-Net) was introduced to improve the generalization ability of the model. At the same time, to learn more discriminative features, a region attention branch was added to the backbone network. The training on the source domain was treated as a classification task: cross-entropy loss was utilized for supervised learning, and triplet loss was introduced to mine the details of source-domain samples and improve classification performance. For the training on the target domain, intra-domain variance was considered to adapt to the difference in data distribution between the source domain and the target domain. In the test phase, the output of the ResNet50 pool-5 layer was used as the image feature, and the Euclidean distance between the query image and each candidate image was calculated to measure their similarity. In experiments on the two large-scale public datasets Market-1501 and DukeMTMC-reID, the Rank-1 accuracy of the proposed method is 80.1% and 67.7% respectively, and its mean Average Precision (mAP) is 49.5% and 44.2% respectively. Experimental results show that the proposed method effectively improves the generalization ability of the model.
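The triplet loss mentioned for source-domain training has a standard hinge form: pull an anchor towards a same-identity (positive) sample and push it at least a margin away from a different-identity (negative) sample. A minimal sketch (the margin value and plain-Python interface are illustrative, not the paper's training setup):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge-style triplet loss: zero once the negative is at least
    `margin` farther from the anchor than the positive."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Positive close, negative far: the constraint is satisfied, loss is 0.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0]))  # → 0.0
```

At test time the same Euclidean distance is computed between the query feature and each gallery feature to rank the candidates.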

Intelligent layout optimization algorithm for 3D pipelines of ships
XIONG Yong, ZHANG Jia, YU Jiajun, ZHANG Benren, LIANG Xuanzhuo, ZHU Qige
Journal of Computer Applications    2020, 40 (7): 2164-2170.   DOI: 10.11772/j.issn.1001-9081.2020010075
Aiming at the problems in ship pipeline layout in a three-dimensional environment that there are too many constraints, the engineering rules are difficult to quantify, and an appropriate optimization evaluation function is hard to determine, a new automatic ship pipeline layout method was proposed. Firstly, the hull and ship equipment were simplified by the Axis-Aligned Bounding Box (AABB) method: they were discretized into space nodes, the nodes' initial pheromone and energy values were given, the obstacles in the space were marked, and specific quantitative forms of the main pipe-laying rules were given. Secondly, by combining the Rapidly-exploring Random Tree (RRT) algorithm with the Ant Colony Optimization (ACO) algorithm, a direction selection strategy, an obstacle avoidance strategy and a variable step strategy were introduced to improve the search efficiency and success rate of the algorithm; then the ACO algorithm was used to optimize the path iteratively by establishing an optimization evaluation function, so as to obtain a comprehensive optimal solution that meets the engineering rules. Finally, automatic pipe-laying simulation experiments were carried out in a computer-simulated cabin space layout environment, which verified the effectiveness and practicability of the proposed method.
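The AABB discretization step can be sketched as follows: tile the cabin volume into grid cells and flag every cell whose axis-aligned box intersects an equipment box. The grid interface and overlap test below are a minimal sketch, not the paper's implementation:

```python
def aabb_overlaps(box_a, box_b):
    """Boxes are ((xmin, ymin, zmin), (xmax, ymax, zmax)); overlap holds
    iff the intervals overlap on every axis."""
    (a_lo, a_hi), (b_lo, b_hi) = box_a, box_b
    return all(a_lo[i] <= b_hi[i] and b_lo[i] <= a_hi[i] for i in range(3))

def mark_obstacles(grid_shape, cell, obstacles):
    """Discretise the volume into cubic cells of side `cell` and return
    the indices of cells blocked by any equipment AABB."""
    blocked = set()
    nx, ny, nz = grid_shape
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                lo = (i * cell, j * cell, k * cell)
                hi = ((i + 1) * cell, (j + 1) * cell, (k + 1) * cell)
                if any(aabb_overlaps((lo, hi), ob) for ob in obstacles):
                    blocked.add((i, j, k))
    return blocked

blocked = mark_obstacles((3, 3, 3), 1.0, [((0.5, 0.5, 0.5), (1.5, 1.5, 1.5))])
```

The unblocked cells then serve as the free space nodes over which the RRT/ACO path search operates.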
Bridge crack classification and measurement method based on deep convolutional neural network
LIANG Xuehui, CHENG Yunze, ZHANG Ruijie, ZHAO Fei
Journal of Computer Applications    2020, 40 (4): 1056-1061.   DOI: 10.11772/j.issn.1001-9081.2019091546
In order to improve the detection level of bridge cracks, and to solve the time-consuming and laborious problem of manual detection and the need to set parameters manually in traditional image processing methods, an improved bridge crack detection algorithm based on GoogLeNet was proposed. Firstly, a large-scale bridge crack Retinex-Laplace-Histogram equalization (RLH) dataset was constructed for model training and testing. Secondly, based on the original GoogLeNet model, the inception module was improved by using a normalized convolution kernel, three improved schemes were used to modify the beginning of the network, the seventh and later inception layers were removed, and a bridge crack feature image classification system was established. Finally, a sliding window was used to accurately locate the cracks, and the lengths and widths of the cracks were calculated by a skeleton extraction algorithm. The experimental results show that compared with the original GoogLeNet network, the improved GoogLeNet network increased the recognition accuracy by 3.13%, and reduced the training time to 64.6% of the original. In addition, the skeleton extraction algorithm can consider the trend of the crack, calculate the width more accurately, and give both the maximum width and the average width. In summary, the proposed classification and measurement method has the characteristics of high accuracy, fast speed, accurate positioning and accurate measurement.
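The sliding-window localization step can be sketched as follows: tile the image with overlapping patches and keep the patches that the classifier flags as cracked. The window/stride values and the `is_crack` callable standing in for the trained CNN are illustrative assumptions:

```python
def sliding_windows(width, height, win, stride):
    """Yield the top-left corners of win x win patches tiling the image."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield x, y

def locate_cracks(image, is_crack, win, stride):
    """Return the windows the patch classifier flags as containing a crack.
    `is_crack(image, x, y, win)` stands in for the trained network."""
    height, width = len(image), len(image[0])
    return [(x, y) for x, y in sliding_windows(width, height, win, stride)
            if is_crack(image, x, y, win)]

# Toy example: a 4x4 binary "image" where 1 marks crack pixels.
img = [[0, 0, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 0],
       [0, 0, 0, 0]]
has_crack = lambda im, x, y, w: any(im[y + j][x + i]
                                    for j in range(w) for i in range(w))
print(locate_cracks(img, has_crack, win=2, stride=2))  # → [(2, 0), (2, 2)]
```

The flagged windows bound the crack region, within which the skeleton extraction then measures length and width.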
Feature selection based on statistical random forest algorithm
SONG Yuan, LIANG Xuechun, ZHANG Ran
Journal of Computer Applications    2015, 35 (5): 1459-1461.   DOI: 10.11772/j.issn.1001-9081.2015.05.1459

To address the problems that traditional feature selection methods for the brain functional connectivity matrix derived from Resting-state functional Magnetic Resonance Imaging (R-fMRI) suffer from feature redundancy and cannot determine the final feature dimension, a new feature selection algorithm was proposed. The algorithm combined the Random Forest (RF) algorithm with statistical methods and was applied to an identification experiment distinguishing schizophrenic patients from normal controls, with the features selected according to the classification results on out-of-bag data. The experimental results show that compared with traditional Principal Component Analysis (PCA), the proposed algorithm can effectively retain important features and improve recognition accuracy, and the selected features have good medical interpretability.
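The selection step can be sketched generically: rank features by a random-forest importance score (for example, out-of-bag permutation scores) and keep either a fixed number or all features above a threshold. The interface below is an illustrative sketch, not the paper's algorithm:

```python
def select_features(importances, threshold=None, top_k=None):
    """Rank feature indices by importance (descending) and keep either the
    top_k features or all features whose importance reaches threshold."""
    ranked = sorted(range(len(importances)), key=lambda i: -importances[i])
    if top_k is not None:
        return ranked[:top_k]
    return [i for i in ranked if importances[i] >= threshold]

# Hypothetical importance scores for four connectivity features.
scores = [0.10, 0.40, 0.05, 0.30]
print(select_features(scores, top_k=2))  # → [1, 3]
```

In practice the cut-off would be tuned by re-evaluating out-of-bag accuracy as features are added, which lets the final feature dimension emerge from the data rather than being fixed in advance.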

Applications of unbalanced data classification based on optimized support vector machine ensemble classifier
ZHANG Shaoping, LIANG Xuechun
Journal of Computer Applications    2015, 35 (5): 1306-1309.   DOI: 10.11772/j.issn.1001-9081.2015.05.1306

Traditional classification algorithms are mostly based on balanced datasets, but when the samples are imbalanced, the performance of these learning algorithms often decreases significantly. For the classification of imbalanced data, an optimized Support Vector Machine (SVM) ensemble classifier model was proposed. Firstly, the imbalanced data were preprocessed with KSMOTE and Bootstrap, and the corresponding SVM models were generated in parallel. Then the parameters of these SVM models were optimized by the complex method. Finally, the optimized SVM ensemble classifier was built from the optimized models and produced the final result by a voting mechanism. Experiments on 5 UCI standard datasets show that the optimized SVM ensemble classifier model achieves higher classification accuracy than the single SVM model, the optimized SVM model, and other baselines. The results also verify the effect of different bootNum values on the optimized SVM ensemble classifier.
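The final voting step can be sketched as follows; the callables standing in for the optimized SVM models are an illustrative assumption:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse the per-model labels for one sample by simple majority."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(models, sample):
    """`models` are callables standing in for the trained SVM members."""
    return majority_vote([m(sample) for m in models])

# Three toy "models" voting on one sample: two vote 1, one votes 0.
models = [lambda s: 1, lambda s: 0, lambda s: 1]
print(ensemble_predict(models, None))  # → 1
```

Because each member is trained on a differently resampled (KSMOTE + Bootstrap) dataset, the vote aggregates diverse decision boundaries, which is what gives the ensemble its robustness on the minority class.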

Novel validity index for fuzzy clustering
ZHENG Hongliang, XU Benqiang, ZHAO Xiaohui, ZOU Li
Journal of Computer Applications    2014, 34 (8): 2166-2169.   DOI: 10.11772/j.issn.1001-9081.2014.08.2166

A cluster number must be pre-defined in the classical Fuzzy C-Means (FCM) algorithm; otherwise the FCM algorithm cannot work normally, which limits its applications. Aiming at the problem of pre-assigning the cluster number for the FCM algorithm, a new fuzzy cluster validity index was presented. Firstly, the membership matrix was obtained by running the FCM algorithm. Secondly, the intra-class compactness and the inter-class overlap were computed from the membership matrix. Finally, a new cluster validity index was defined using the intra-class compactness and the inter-class overlap. The proposed index overcomes the shortcoming of FCM that the cluster number must be pre-assigned, and the optimal cluster number can be effectively found by it. Experimental results on artificial and real datasets show the validity of the proposed index; the optimal cluster number is obtained for the three fuzzy factor values 1.8, 2.0 and 2.2 that are generally used in the FCM algorithm.
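The idea of scoring a clustering from its membership matrix alone can be sketched as follows: reward high maximal memberships (compactness) and penalize pairwise membership overlap. The formulas below are an illustrative stand-in; the paper's exact index differs:

```python
def validity_index(U):
    """Illustrative index from a membership matrix U (clusters x samples):
    average maximal membership minus average pairwise overlap."""
    c, n = len(U), len(U[0])
    compact = sum(max(U[i][k] for i in range(c)) for k in range(n)) / n
    overlap = sum(min(U[i][k], U[j][k])
                  for i in range(c) for j in range(i + 1, c)
                  for k in range(n)) / n
    return compact - overlap

def best_cluster_number(memberships_by_c):
    """Pick the cluster number whose FCM run maximises the index."""
    return max(memberships_by_c, key=lambda c: validity_index(memberships_by_c[c]))

crisp = [[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]]       # well-separated partition
fuzzy = [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]       # maximally ambiguous partition
print(validity_index(crisp), validity_index(fuzzy))  # → 1.0 0.0
```

Running FCM for a range of candidate cluster numbers and keeping the one with the best index value is how such an index removes the need to pre-assign the cluster number.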

Multivariate linear regression forecasting model based on MapReduce
DAI Liang, XU Hongke, CHEN Ting, QIAN Chao, LIANG Dianpeng
Journal of Computer Applications    2014, 34 (7): 1862-1866.   DOI: 10.11772/j.issn.1001-9081.2014.07.1862

Considering that the traditional multivariate linear regression method has long processing time and limited memory, a parallel multivariate linear regression forecasting model based on MapReduce was designed for time-series sample data. The model was composed of three MapReduce processes, which were used respectively to solve the eigenvectors and standard orthogonal vectors of the cross-product matrix composed of the historical data, to forecast the future eigenvalue and eigenvector matrices, and to estimate the regression parameters at the next moment. Experiments were designed and implemented to validate the effectiveness of the proposed parallel multivariate linear regression forecasting model. The experimental results show that the multivariate linear regression prediction model based on MapReduce has good speedup and scaleup, and is suitable for the analysis and forecasting of large data.
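The map/reduce pattern for regression can be illustrated with a simplified normal-equations version: each map task computes partial sums of XᵀX and Xᵀy over its data partition, and the reduce task adds them up before solving for the coefficients. This is a sketch of the general pattern, not the paper's three-job eigen-decomposition pipeline:

```python
def map_partition(rows):
    """Map step: per-partition partial sums of XtX and Xty.
    `rows` is a list of (feature_vector, target) pairs."""
    d = len(rows[0][0])
    xtx = [[0.0] * d for _ in range(d)]
    xty = [0.0] * d
    for x, y in rows:
        for i in range(d):
            xty[i] += x[i] * y
            for j in range(d):
                xtx[i][j] += x[i] * x[j]
    return xtx, xty

def reduce_partials(partials):
    """Reduce step: element-wise sum of the partial matrices."""
    d = len(partials[0][1])
    xtx = [[sum(p[0][i][j] for p in partials) for j in range(d)] for i in range(d)]
    xty = [sum(p[1][i] for p in partials) for i in range(d)]
    return xtx, xty

# Two partitions of the toy dataset y = 2x.
partials = [map_partition([([1.0], 2.0)]), map_partition([([2.0], 4.0)])]
xtx, xty = reduce_partials(partials)
print(xty[0] / xtx[0][0])  # → 2.0
```

Because the partial sums are small fixed-size matrices, only they travel across the network, which is what gives the approach its speedup and scaleup on large time-series data.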

Hardware/software partitioning based on greedy algorithm and simulated annealing algorithm
ZHANG Liang, XU ChengCheng, TIAN Zheng, LI Tao
Journal of Computer Applications    2013, 33 (07): 1898-1902.   DOI: 10.11772/j.issn.1001-9081.2013.07.1898
Hardware/Software (HW/SW) partitioning is one of the crucial steps in the co-design of an embedded system, and it has been proven to be an NP-hard problem. Considering that the latest work converges slowly and yields poor-quality solutions, the authors proposed a HW/SW partitioning method based on the greedy algorithm and the Simulated Annealing (SA) algorithm. The method reduced the HW/SW partitioning problem to the extended 0-1 knapsack problem and used the greedy algorithm for an initial rapid partition; it then divided the solution space reasonably, designed a new cost function, and used the improved SA algorithm to search for the global optimal solution. Compared with the existing improved algorithms, the experimental results show that the new algorithm is more effective and practical in terms of partitioning quality and running time, with improvements of 8% and 17% respectively.
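The two phases can be sketched as follows: a greedy 0-1 knapsack pass moves the tasks with the best speedup-per-area to hardware until the area budget is spent, and an SA acceptance rule then allows occasional uphill moves to escape the greedy local optimum. The task attributes and cost model are illustrative assumptions, not the paper's formulation:

```python
import math
import random

def greedy_partition(tasks, hw_area):
    """Initial HW/SW split as a 0-1 knapsack: greedily move to hardware
    the tasks with the highest speedup-per-area ratio that still fit."""
    order = sorted(tasks, key=lambda t: t["speedup"] / t["area"], reverse=True)
    hw, used = [], 0.0
    for t in order:
        if used + t["area"] <= hw_area:
            hw.append(t["name"])
            used += t["area"]
    return hw  # all remaining tasks stay in software

def accept(delta_cost, temp):
    """SA acceptance rule: always take improvements, and take worsening
    moves with probability exp(-delta/T) to escape local optima."""
    return delta_cost < 0 or random.random() < math.exp(-delta_cost / temp)

tasks = [
    {"name": "fft",  "area": 4, "speedup": 9},
    {"name": "crc",  "area": 1, "speedup": 3},
    {"name": "jpeg", "area": 5, "speedup": 6},
]
print(greedy_partition(tasks, hw_area=5))  # → ['crc', 'fft']
```

Starting SA from the greedy solution rather than a random one is what gives the combined method its fast convergence.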
Application of nonlinear feature extraction and least square support vector machines for fault diagnosis of chemical process
Liang XU
Journal of Computer Applications    2010, 30 (1): 236-239.  
Nonlinear feature extraction methods, Kernel Principal Component Analysis (KPCA) and Kernel Independent Component Analysis (KICA), were used to eliminate uncorrelated components from the input data and reduce its dimension. KPCA adopted a kernel function to map the input data into a feature space and performed linear PCA there. KICA extracted independent components by a linear ICA transformation in the KPCA-whitened space. The extracted features were taken as the input of a Least Squares Support Vector Machine (LSSVM) classifier. Incorporating nonlinear feature extraction into LSSVM served as a new method for intelligent fault diagnosis. The proposed method was applied to the fault diagnosis of the lubricating oil process of a petrochemical plant, and its effectiveness was verified.
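The KPCA step can be sketched as follows: build an RBF kernel matrix, double-centre it, and project the samples onto the leading eigenvectors. The RBF kernel choice and the toy data are illustrative assumptions; the paper does not specify its kernel here:

```python
import numpy as np

def kpca(X, n_components, gamma=1.0):
    """Kernel PCA with an RBF kernel: build the kernel matrix K,
    double-centre it, and return the training-sample scores on the
    leading components."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # double centring
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    # Score of sample k on component i is sqrt(lambda_i) * v_i[k].
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Two tight clusters far apart: the first component separates them.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
scores = kpca(X, n_components=2)
```

These low-dimensional scores (after the subsequent KICA transformation) are what would be fed to the LSSVM classifier in place of the raw measurements.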
Sealed-bid electronic auction scheme based on new group signature
Qiuliang XU
Journal of Computer Applications   
With the help of a group signature based on Hierarchical Identity-Based Signature (HIBS) and a bilinear map, a new secure and efficient sealed-bid electronic auction scheme was presented. The scheme not only fulfilled the common security requirements, such as the anonymity of bidders, robustness, non-repudiation of the winner and public verifiability, but also had some other merits: concise steps, convenient construction, and low communication and computation cost. Meanwhile, the scheme realized simple revocation, one-time registration, and multiple rounds of bidding.
Identity based broadcast encryption without pairings
Xinfang ZHANG, Qiuliang XU
Journal of Computer Applications   
Identity-Based Encryption (IBE) schemes and Identity-Based Broadcast Encryption (IBBE) schemes are often constructed by using bilinear maps (also known as pairings) on elliptic curves. In this paper, an IBBE scheme without pairings was given. It is secure in the random oracle model under the Quadratic Residuosity assumption.